Goal of this Rmarkdown document

This Rmarkdown notebook contains detailed documentation of how the analysis will be performed, with R code where informative. It contains not only the statistical model code, but also the results the models produce when applied to the pilot data. At the end of the section we report a power analysis, providing some insight into how many participants we need to test.

Main Research Questions

Each of the analyses is tailored to provide statistically informed conclusions about the research questions addressed in the sections below.

Initial descriptive checks of the data

Here we provide a descriptive overview of the syllable identifications relative to target (Table 1). In the current pilot data the number of syllables identified by EasyAlign perfectly matched the targeted number of syllables, i.e., in 100% of the trials there were 0 differences in the number of syllables detected versus targeted.

(#tab:table01)
Table 1. Percentage of trials by difference between detected and target syllable counts

| syllable differences | percentage |
|----------------------|------------|
| 0                    | 100.00     |
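
Percentages like those in Table 1 can be computed with base R. The following is a minimal sketch, assuming a hypothetical column `D$syll_diff` holding the detected-minus-target syllable count per trial (the actual column name in the pilot data may differ):

```r
#percentage of trials per syllable-count difference (syll_diff is an assumed column name)
syll_pct <- round(100 * prop.table(table(D$syll_diff)), 2)
syll_pct
```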

Table 2 provides the percentages of the different types of L2 stress placement matches and mismatches.

(#tab:table02)
Table 2. Percentage of trials by stress match/mismatch type

| stress mis/match type      | stress difference | percentage |
|----------------------------|-------------------|------------|
| L2 correct                 | same              | 33.93      |
| L2 incorrect & L1 match    | same              | 0.00       |
| L2 incorrect & L1 mismatch | same              | 16.07      |
| L2 correct                 | difference        | 44.05      |
| L2 incorrect & L1 match    | difference        | 0.00       |
| L2 incorrect & L1 mismatch | difference        | 5.95       |

Main Confirmatory Analysis

For all analyses we will use mixed linear regressions with maximum likelihood estimation using the R-package nlme. Our models will always have participant and trial ID as random variables. We will always try to fit random slopes in addition to random intercepts. With the current pilot data, however, adding random slopes resulted in non-converging models; thus for all models reported we have participant and trial ID as random intercepts only. We further report a Cohen's D for our model predictors using the R-package EMAtools. For interaction effects we will follow up with post-hoc contrast analyses using the R-package lsmeans, applying a Bonferroni correction for such multiple comparison tests.

Research question 1: Effect of gesture on stress timing

For the first analysis we simply assess whether the absolute difference in directional stress timing differs between the gesture and no gesture conditions, while also accounting for effects on timing due to the L1/L2 stress difference and accentedness. If gesture improves stress timing, lower absolutized stress timings are to be expected (i.e., smaller deviations from perfect synchrony).

Figure 3 upper panel. Effect of gesture versus no gesture on stress timing

We first construct a base model predicting the overall mean, with participant and trial ID as random variables and absolute stress timing as the dependent variable. This model is then compared to a model with stress difference + accentedness + gesture condition as main effects.

Code chunk 1. Model research question 1

```r
D$accuracy <- abs(D$stressed_mistimingL2L1) #absolute deviation stress timing

#base model predicting the overall mean stress timing
model0 <- lme(accuracy~1, data = D, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)

#alternative model with stress, accentedness, and gesture versus no gesture as predictors
model1 <- lme(accuracy~stress+accent+condition, data = D, random =  list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
anovcomp01 <- anova(model0, model1) #test base model versus model 1
sum1 <- summary(model1) 
Dmod1 <- lme.dscore(model1, D, type="nlme")
```

Click here for model 1 R output

```r
model1
```

```
## Linear mixed-effects model fit by maximum likelihood
##   Data: D 
##   Log-likelihood: -2026.702
##   Fixed: accuracy ~ stress + accent + condition 
##          (Intercept)     stressdifference accentaccent present 
##            67.964286           -19.797619             8.261905 
##     conditiongesture 
##           -21.285714 
## 
## Random effects:
##  Formula: ~1 | ppn
##         (Intercept)
## StdDev:  0.01303649
## 
##  Formula: ~1 | target %in% ppn
##         (Intercept) Residual
## StdDev:    57.98303 85.65074
## 
## Number of Observations: 336
## Number of Groups: 
##             ppn target %in% ppn 
##               2             168
```

```r
sum1
```

```
## Linear mixed-effects model fit by maximum likelihood
##  Data: D 
##        AIC      BIC    logLik
##   4067.404 4094.124 -2026.702
## 
## Random effects:
##  Formula: ~1 | ppn
##         (Intercept)
## StdDev:  0.01303649
## 
##  Formula: ~1 | target %in% ppn
##         (Intercept) Residual
## StdDev:    57.98303 85.65074
## 
## Fixed effects: accuracy ~ stress + accent + condition 
##                          Value Std.Error  DF   t-value p-value
## (Intercept)           67.96429  12.21253 167  5.565129  0.0000
## stressdifference     -19.79762  13.01534 164 -1.521099  0.1302
## accentaccent present   8.26190  13.01534 164  0.634782  0.5265
## conditiongesture     -21.28571   9.40139 167 -2.264103  0.0249
##  Correlation: 
##                      (Intr) strssd accntp
## stressdifference     -0.533              
## accentaccent present -0.533  0.000       
## conditiongesture     -0.385  0.000  0.000
## 
## Standardized Within-Group Residuals:
##        Min         Q1        Med         Q3        Max 
## -1.6119396 -0.4080404 -0.2752586 -0.1043269  4.2310610 
## 
## Number of Observations: 336
## Number of Groups: 
##             ppn target %in% ppn 
##               2             168
```

```r
Dmod1
```

```
##                               t  df           d
## stressdifference     -1.5210995 164 -0.23755583
## accentaccent present  0.6347823 164  0.09913635
## conditiongesture     -2.2641030 167 -0.35040310
```

Click here for model 1 summary

In our pilot data, the model with stress, accentedness, and gesture condition as predictors outperformed the base model, change in Chi-sq (3) = 7.837, p = 0.050. The model's results indicate a statistically reliable main effect of gesture vs. no gesture, b = -21.2857, t (167) = -2.2641, p = 0.025, Cohen's D = -0.35. Stress difference was not a reliable main effect, b = -19.798, t (164) = -1.5211, p = 0.130, Cohen's D = -0.238. Accent was not a reliable main effect either, b = 8.2619, t (164) = 0.635, p = 0.526, Cohen's D = 0.099.

Figure 3 lower panels. Effect of gesture versus no gesture ~ stress and accent

We will further assess this with a more complex model, in which we expand our analysis with the relevant stimulus conditions as well as their interactions with the gesture condition. If the interactions are statistically reliable, we will perform post-hoc comparisons with the R-package "lsmeans", applying a Bonferroni correction.

Code chunk 2. Model research question 1, with three way interaction

```r
#alternative model with the condition x stress x accent interactions as predictors
model2 <- lme(accuracy~condition*stress*accent, 
              data = D, random =  list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)

anova(model1, model2) #test model 1 versus model 2
##        Model df      AIC      BIC    logLik   Test  L.Ratio p-value
## model1     1  7 4067.404 4094.124 -2026.702                        
## model2     2 11 4073.784 4115.772 -2025.892 1 vs 2 1.620135  0.8052
#summary of model 2 and post-hoc comparisons
sum3 <- summary(model2)
posthoc3 <- lsmeans(model2, list(pairwise ~ condition|accent|stress),  adjust="bonferroni")
```

Click here for model 2 summary

```r
sum3
```

```
## Linear mixed-effects model fit by maximum likelihood
##  Data: D 
##        AIC      BIC    logLik
##   4073.784 4115.772 -2025.892
## 
## Random effects:
##  Formula: ~1 | ppn
##         (Intercept)
## StdDev: 0.009792346
## 
##  Formula: ~1 | target %in% ppn
##         (Intercept) Residual
## StdDev:    58.03803 85.35406
## 
## Fixed effects: accuracy ~ condition * stress * accent 
##                                                            Value Std.Error  DF
## (Intercept)                                             67.40476  16.11977 164
## conditiongesture                                       -11.45238  18.85156 164
## stressdifference                                       -28.52381  22.79680 163
## accentaccent present                                     7.88095  22.79680 163
## conditiongesture:stressdifference                        0.02381  26.66013 164
## conditiongesture:accentaccent present                  -16.66667  26.66013 164
## stressdifference:accentaccent present                   20.45238  32.23954 163
## conditiongesture:stressdifference:accentaccent present  -6.04762  37.70312 164
##                                                          t-value p-value
## (Intercept)                                             4.181496  0.0000
## conditiongesture                                       -0.607503  0.5444
## stressdifference                                       -1.251220  0.2126
## accentaccent present                                    0.345704  0.7300
## conditiongesture:stressdifference                       0.000893  0.9993
## conditiongesture:accentaccent present                  -0.625153  0.5327
## stressdifference:accentaccent present                   0.634388  0.5267
## conditiongesture:stressdifference:accentaccent present -0.160401  0.8728
##  Correlation: 
##                                                        (Intr) cndtng strssd
## conditiongesture                                       -0.585              
## stressdifference                                       -0.707  0.413       
## accentaccent present                                   -0.707  0.413  0.500
## conditiongesture:stressdifference                       0.413 -0.707 -0.585
## conditiongesture:accentaccent present                   0.413 -0.707 -0.292
## stressdifference:accentaccent present                   0.500 -0.292 -0.707
## conditiongesture:stressdifference:accentaccent present -0.292  0.500  0.413
##                                                        accntp cndtn: cndt:p
## conditiongesture                                                           
## stressdifference                                                           
## accentaccent present                                                       
## conditiongesture:stressdifference                      -0.292              
## conditiongesture:accentaccent present                  -0.585  0.500       
## stressdifference:accentaccent present                  -0.707  0.413  0.413
## conditiongesture:stressdifference:accentaccent present  0.413 -0.707 -0.707
##                                                        strs:p
## conditiongesture                                             
## stressdifference                                             
## accentaccent present                                         
## conditiongesture:stressdifference                            
## conditiongesture:accentaccent present                        
## stressdifference:accentaccent present                        
## conditiongesture:stressdifference:accentaccent present -0.585
## 
## Standardized Within-Group Residuals:
##        Min         Q1        Med         Q3        Max 
## -1.5871742 -0.5052313 -0.2389013 -0.1052169  4.3243362 
## 
## Number of Observations: 336
## Number of Groups: 
##             ppn target %in% ppn 
##               2             168
```

Click here for posthoc3 output

```r
posthoc3
```

```
## $`lsmeans of condition | accent, stress`
## accent = no accent, stress = same:
##  condition lsmean   SE df lower.CL upper.CL
##  nogesture   67.4 16.1  1     -137      272
##  gesture     56.0 16.1  1     -149      261
## 
## accent = accent present, stress = same:
##  condition lsmean   SE df lower.CL upper.CL
##  nogesture   75.3 16.1  1     -130      280
##  gesture     47.2 16.1  1     -158      252
## 
## accent = no accent, stress = difference:
##  condition lsmean   SE df lower.CL upper.CL
##  nogesture   38.9 16.1  1     -166      244
##  gesture     27.5 16.1  1     -177      232
## 
## accent = accent present, stress = difference:
##  condition lsmean   SE df lower.CL upper.CL
##  nogesture   67.2 16.1  1     -138      272
##  gesture     33.1 16.1  1     -172      238
## 
## Degrees-of-freedom method: containment 
## Confidence level used: 0.95 
## 
## $`pairwise differences of condition | accent, stress`
## accent = no accent, stress = same:
##  3                   estimate   SE  df t.ratio p.value
##  nogesture - gesture     11.5 18.9 164 0.608   0.5444 
## 
## accent = accent present, stress = same:
##  3                   estimate   SE  df t.ratio p.value
##  nogesture - gesture     28.1 18.9 164 1.492   0.1377 
## 
## accent = no accent, stress = difference:
##  3                   estimate   SE  df t.ratio p.value
##  nogesture - gesture     11.4 18.9 164 0.606   0.5452 
## 
## accent = accent present, stress = difference:
##  3                   estimate   SE  df t.ratio p.value
##  nogesture - gesture     34.1 18.9 164 1.811   0.0719 
## 
## Degrees-of-freedom method: containment
```

Research question 2: Prosodic modulation of gesture

Does gesture vs. no gesture affect acoustic markers of stress? We perform a mixed linear regression with the normalized acoustic markers as DV, and acoustic marker (peak F0, peak envelope, and duration) x condition as independent variables.

Figure 4. Effect of gesture vs. no gesture on acoustic markers of stress

Code chunk 3. Gesture and acoustic output

```r
Dlong <- gather(D, "marker", "acoust_out", 13:15) #long format over the three acoustic markers

#base model predicting the overall mean acoustic output
model0 <- lme(acoust_out~1, data = Dlong, random =  list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
#alternative model with acoustic marker x gesture condition as predictors
model1 <- lme(acoust_out~marker*condition, data = Dlong, random =  list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
anova(model0, model1) #test base model versus model 1
##        Model df      AIC      BIC    logLik   Test  L.Ratio p-value
## model0     1  4 1951.533 1971.196 -971.7666                        
## model1     2  9 1534.941 1579.182 -758.4703 1 vs 2 426.5926  <.0001
#summary of model 1 and post-hoc comparisons
anovamod0mod1 <- anova(model0, model1)
sum1 <- summary(model1)
posthocsum1 <- lsmeans(model1, list(pairwise ~ condition|marker),  adjust="bonferroni")
Dmod1 <- lme.dscore(model1, Dlong, type="nlme")
```

Click here for model 1 R output

```r
sum1
```

```
## Linear mixed-effects model fit by maximum likelihood
##  Data: Dlong 
##        AIC      BIC    logLik
##   1534.941 1579.182 -758.4703
## 
## Random effects:
##  Formula: ~1 | ppn
##          (Intercept)
## StdDev: 1.375216e-05
## 
##  Formula: ~1 | target %in% ppn
##         (Intercept) Residual
## StdDev:  0.08539542 0.506822
## 
## Fixed effects: acoust_out ~ marker * condition 
##                                     Value  Std.Error  DF   t-value p-value
## (Intercept)                     1.5512255 0.03977187 835  39.00308  0.0000
## markerpeakF0z                  -0.5074599 0.05546413 835  -9.14933  0.0000
## markersDURz                    -0.7038124 0.05546413 835 -12.68951  0.0000
## conditiongesture                0.1850414 0.05546413 835   3.33624  0.0009
## markerpeakF0z:conditiongesture -0.1713849 0.07843812 835  -2.18497  0.0292
## markersDURz:conditiongesture   -0.3458323 0.07843812 835  -4.40898  0.0000
##  Correlation: 
##                                (Intr) mrkrF0 mrkDUR cndtng mrkF0:
## markerpeakF0z                  -0.697                            
## markersDURz                    -0.697  0.500                     
## conditiongesture               -0.697  0.500  0.500              
## markerpeakF0z:conditiongesture  0.493 -0.707 -0.354 -0.707       
## markersDURz:conditiongesture    0.493 -0.354 -0.707 -0.707  0.500
## 
## Standardized Within-Group Residuals:
##         Min          Q1         Med          Q3         Max 
## -4.32238038 -0.52000609  0.00753372  0.54513919  4.64779877 
## 
## Number of Observations: 1008
## Number of Groups: 
##             ppn target %in% ppn 
##               2             168
```

Click here for model 1 summary for research question 2

We tested whether gesture vs. no gesture affects acoustic markers of stress by regressing the normalized acoustic output on acoustic marker (peak F0, peak envelope, and duration) x condition, and comparing this model against a base model predicting the overall mean.
The model with acoustic marker x condition was more reliable than the base model predicting the overall mean of the acoustic output, Chi-sq (5) = 426.593, p < .001. Table 3 provides an overview of the model predictors.

(#tab:table03)
Table 3. Model fixed effects

| predictor                      | Value | Std.Error | DF  | t-value | p-value | Cohen's d |
|--------------------------------|-------|-----------|-----|---------|---------|-----------|
| (Intercept)                    | 1.55  | 0.04      | 835 | 39.00   | 0.00    | NA        |
| markerpeakF0z                  | -0.51 | 0.06      | 835 | -9.15   | 0.00    | -0.63     |
| markersDURz                    | -0.70 | 0.06      | 835 | -12.69  | 0.00    | -0.88     |
| conditiongesture               | 0.19  | 0.06      | 835 | 3.34    | 0.00    | 0.23      |
| markerpeakF0z:conditiongesture | -0.17 | 0.08      | 835 | -2.18   | 0.03    | -0.15     |
| markersDURz:conditiongesture   | -0.35 | 0.08      | 835 | -4.41   | 0.00    | -0.31     |
We will further perform a post-hoc analysis disentangling these interaction effects, where we assess for which acoustic marker gesture vs. no gesture affected acoustic output.

Click here for posthoc model 1 output

```r
posthocsum1
```

```
## $`lsmeans of condition | marker`
## marker = peakAMPz:
##  condition lsmean     SE df lower.CL upper.CL
##  nogesture  1.551 0.0398  1    1.046     2.06
##  gesture    1.736 0.0398  1    1.231     2.24
## 
## marker = peakF0z:
##  condition lsmean     SE df lower.CL upper.CL
##  nogesture  1.044 0.0398  1    0.538     1.55
##  gesture    1.057 0.0398  1    0.552     1.56
## 
## marker = sDURz:
##  condition lsmean     SE df lower.CL upper.CL
##  nogesture  0.847 0.0398  1    0.342     1.35
##  gesture    0.687 0.0398  1    0.181     1.19
## 
## Degrees-of-freedom method: containment 
## Confidence level used: 0.95 
## 
## $`pairwise differences of condition | marker`
## marker = peakAMPz:
##  2                   estimate     SE  df t.ratio p.value
##  nogesture - gesture  -0.1850 0.0555 835 -3.336  0.0009 
## 
## marker = peakF0z:
##  2                   estimate     SE  df t.ratio p.value
##  nogesture - gesture  -0.0137 0.0555 835 -0.246  0.8056 
## 
## marker = sDURz:
##  2                   estimate     SE  df t.ratio p.value
##  nogesture - gesture   0.1608 0.0555 835  2.899  0.0038 
## 
## Degrees-of-freedom method: containment
```

Research question 3: Gesture-speech asynchrony as a function of trial conditions

From the previous analyses we should know whether stress timing performance and acoustic stress marking increase or decrease as a function of gesture, as well as the possible roles of stress difference and accentedness in stress timing. A further question is whether the timing between gesture and speech is affected by stress difference and accentedness, which would signal that gesture does not simply always synchronize with speech, but that coordination is destabilized by the difficulty of reaching the L2 targets without orthographic cues or with an L1 stress competitor.
Using a similar linear mixed modeling approach as in the previous analyses, we compare a base model against models with stress difference and accentedness (and their possible interaction) as predictors of the absolutized gesture-speech asynchrony.

Figure 5. Gesture-speech (a)synchrony depending on stress difference and accentedness
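
The fitting code for this comparison is not shown in this section; the following is a minimal sketch consistent with the output reported below (same variable and object names), where `abs_asynchrony` is assumed to hold the absolutized gesture-speech asynchrony per trial:

```r
#base model predicting the overall mean absolute asynchrony
model0 <- lme(abs_asynchrony~1, data = subD, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
#main effects of stress difference and accentedness
model1 <- lme(abs_asynchrony~stress+accent, data = subD, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
#adding the stress x accent interaction
model2 <- lme(abs_asynchrony~stress*accent, data = subD, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
anova(model0, model1) #test base model versus main-effects model
anova(model1, model2) #test main-effects model versus interaction model
sum1 <- summary(model1)
sum2 <- summary(model2)
posthoc2 <- lsmeans(model2, list(pairwise ~ stress|accent), adjust="bonferroni")
```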

Click here for model 1, model 2, and post-hoc output

```r
sum1
```

```
## Linear mixed-effects model fit by maximum likelihood
##  Data: subD 
##        AIC      BIC    logLik
##   2047.489 2066.233 -1017.744
## 
## Random effects:
##  Formula: ~1 | ppn
##         (Intercept)
## StdDev: 0.004390321
## 
##  Formula: ~1 | target %in% ppn
##         (Intercept) Residual
## StdDev:    103.4332 1.721462
## 
## Fixed effects: abs_asynchrony ~ stress + accent 
##                          Value Std.Error  DF   t-value p-value
## (Intercept)           88.90476  13.94886 164  6.373623  0.0000
## stressdifference      14.71429  16.10675 164  0.913548  0.3623
## accentaccent present -10.09524  16.10675 164 -0.626771  0.5317
##  Correlation: 
##                      (Intr) strssd
## stressdifference     -0.577       
## accentaccent present -0.577  0.000
## 
## Standardized Within-Group Residuals:
##          Min           Q1          Med           Q3          Max 
## -0.016668525 -0.011084262 -0.007009053  0.007736769  0.084032038 
## 
## Number of Observations: 168
## Number of Groups: 
##             ppn target %in% ppn 
##               2             168
```

```r
sum2
```

```
## Linear mixed-effects model fit by maximum likelihood
##  Data: subD 
##        AIC      BIC    logLik
##   2049.444 2071.312 -1017.722
## 
## Random effects:
##  Formula: ~1 | ppn
##         (Intercept)
## StdDev:   0.0044314
## 
##  Formula: ~1 | target %in% ppn
##         (Intercept) Residual
## StdDev:    103.4192 1.729301
## 
## Fixed effects: abs_asynchrony ~ stress * accent 
##                                          Value Std.Error  DF   t-value p-value
## (Intercept)                           87.21429  16.15363 163  5.399053  0.0000
## stressdifference                      18.09524  22.84468 163  0.792099  0.4295
## accentaccent present                  -6.71429  22.84468 163 -0.293910  0.7692
## stressdifference:accentaccent present -6.76190  32.30725 163 -0.209300  0.8345
##  Correlation: 
##                                       (Intr) strssd accntp
## stressdifference                      -0.707              
## accentaccent present                  -0.707  0.500       
## stressdifference:accentaccent present  0.500 -0.707 -0.707
## 
## Standardized Within-Group Residuals:
##          Min           Q1          Med           Q3          Max 
## -0.017022149 -0.010997239 -0.007258370  0.008047324  0.084164000 
## 
## Number of Observations: 168
## Number of Groups: 
##             ppn target %in% ppn 
##               2             168
```

```r
posthoc2
```

```
## $`lsmeans of stress | accent`
## accent = no accent:
##  stress     lsmean   SE df lower.CL upper.CL
##  same         87.2 16.2  1   -118.0      292
##  difference  105.3 16.2  1    -99.9      311
## 
## accent = accent present:
##  stress     lsmean   SE df lower.CL upper.CL
##  same         80.5 16.2  1   -124.8      286
##  difference   91.8 16.2  1   -113.4      297
## 
## Degrees-of-freedom method: containment 
## Confidence level used: 0.95 
## 
## $`pairwise differences of stress | accent`
## accent = no accent:
##  2                 estimate   SE  df t.ratio p.value
##  same - difference    -18.1 22.8 163 -0.792  0.4295 
## 
## accent = accent present:
##  2                 estimate   SE  df t.ratio p.value
##  same - difference    -11.3 22.8 163 -0.496  0.6205 
## 
## Degrees-of-freedom method: containment
```

Click here for model 2 summary for research question 3

For our pilot data, the alternative model including stress difference and accentedness as predictors was not more reliable than the base model predicting the overall mean of the absolutized gesture-speech (a)synchrony, Chi-sq (2) = 1.245, p = 0.537, and adding the interaction between stress difference and accentedness also did not further improve predictions of gesture-speech asynchrony, Chi-sq (3) = 1.290, p = 0.732. Table 4 provides an overview of the model predictors for the model without interactions.

(#tab:table04)
Table 4. Model fixed effects

| predictor            | Value  | Std.Error | DF  | t-value | p-value | Cohen's d |
|----------------------|--------|-----------|-----|---------|---------|-----------|
| (Intercept)          | 88.90  | 13.95     | 164 | 6.37    | 0.00    | NA        |
| stressdifference     | 14.71  | 16.11     | 164 | 0.91    | 0.36    | 0.14      |
| accentaccent present | -10.10 | 16.11     | 164 | -0.63   | 0.53    | -0.10     |

Gesture-speech asynchrony and the directionality of error

From the previous analysis we will know whether gesture-speech synchrony can be affected by trial conditions that may complicate correct stress placement. If gesture-speech (a)synchrony is indeed affected, we can ask how gesture and speech temporally diverge when they are more asynchronous. We will assess this by looking at the gesture-speech asynchrony when the speech stress peak is a) correctly placed on the L2 target, b) incorrectly placed on the L1 target, or c) incorrectly placed on any other syllable. Figure 6 provides an example from our pilot data, where we report directional gesture-speech (a)synchrony when acoustic stress is correctly placed on the L2 target. We will assess whether gesture is attracted into asynchrony with speech in the direction of the L1 stress competitor. We compare this directional (a)synchrony when there is an L1/L2 stress difference versus when there is no stress difference. If there is an attraction of gesture towards L1, we predict a positive effect of stress difference on the directional gesture-speech (a)synchrony as compared to the no stress difference condition.
The current analysis is conditional on whether we a) obtained an effect on gesture-speech synchrony in confirmatory analysis 3, and b) have at least 33% of the total responses for a particular response type (at least 33% of the trials of L2 correct, L2 incorrect-L1 match, or L2 incorrect-L1 mismatch). In the current pilot data, for example, we have primarily L2 correct responses (~78%), so we would only analyze this response type. For each conditional analysis we perform a linear mixed regression with participant and trial ID as random intercepts, stress difference as IV, and directional gesture-speech (a)synchrony as DV.
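
The 33% screening can be done descriptively before any model fitting. A minimal sketch, assuming a hypothetical column `D$stresstype` coding the response type (L2 correct, L2 incorrect-L1 match, L2 incorrect-L1 mismatch) per trial:

```r
#proportion of trials per stress response type (stresstype is an assumed column name)
type_prop <- prop.table(table(D$stresstype))
round(100 * type_prop, 2)
#only response types reaching at least 33% of trials enter the conditional analysis
names(type_prop)[type_prop >= 1/3]
```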

Code Chunk 4. Conditional analysis 3a/3b

```r
#base model predicting the overall mean directional asynchrony
model0 <- lme(asynchrony_L2L1~1, data = subD, random = list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)

#alternative model with stress difference as predictor
model1 <- lme(asynchrony_L2L1~stress, data = subD, random =  list(~1|ppn, ~1|target), method = "ML", na.action = na.exclude)
anova(model0, model1) #test base model versus model 1
##        Model df      AIC      BIC    logLik   Test    L.Ratio p-value
## model0     1  4 2133.622 2146.118 -1062.811                          
## model1     2  5 2135.608 2151.228 -1062.804 1 vs 2 0.01353714  0.9074
summary(model1) 
## Linear mixed-effects model fit by maximum likelihood
##  Data: subD 
##        AIC      BIC    logLik
##   2135.608 2151.228 -1062.804
## 
## Random effects:
##  Formula: ~1 | ppn
##         (Intercept)
## StdDev: 0.008885091
## 
##  Formula: ~1 | target %in% ppn
##         (Intercept) Residual
## StdDev:    135.2493 2.397586
## 
## Fixed effects: asynchrony_L2L1 ~ stress 
##                      Value Std.Error  DF   t-value p-value
## (Intercept)      27.095238  14.84788 165 1.8248560  0.0698
## stressdifference  2.428571  20.99807 165 0.1156569  0.9081
##  Correlation: 
##                  (Intr)
## stressdifference -0.707
## 
## Standardized Within-Group Residuals:
##          Min           Q1          Med           Q3          Max 
## -0.085892616 -0.007144199 -0.002239972  0.007750989  0.057509243 
## 
## Number of Observations: 168
## Number of Groups: 
##             ppn target %in% ppn 
##               2             168
```

Power analysis

To provide some indication of the amount of data we need to collect to obtain meaningful results, we perform a power analysis for the first confirmatory research question. We assess the power of a model with three main effects (stress difference, accentedness, and gesture condition) on stress timing at an adjusted alpha of .05/3, and identify how many subjects we need to detect a main effect at a power of 80%. We use the R-package 'mixedpower', which is designed to simulate data and estimate power of linear mixed effects models from pilot data (see Kumle et al. 2021 for a tutorial). Table 5 shows the power estimates for the effects for N = 20 to 60 participants. For the effects of gesture and stress we already have enough power to detect an actual effect at N = 20 and higher. Given that accentedness is not as important a variable as gesture condition and stress difference, we conclude that an absolute minimum of 20 participants would suffice for a meaningful test of confirmatory research question 1. Note, however, that we will aim to collect 40 participants.
Table 6 further reports power calculations for a more complex model with the three-way interaction and the lower-order interaction effects between stress, accentedness, and gesture condition. Three-way interactions are likely to have low power, but two-way interactions could become more meaningful at N = 60. We therefore set our ideal upper bound at N = 60.

Table 5. Power analysis simple model

```r
#for details on this power analysis see
#https://link.springer.com/article/10.3758/s13428-021-01546-0

#main DV of interest
D$accuracy <- abs(D$stressed_mistimingL2L1) #absolute deviation of stress timing from the L2 target
D$ppn <- as.numeric(as.factor(D$ppn)) #random variable to extend in the simulation

#fit a lme4 model instead of nlme::lme
model1 <- lmer(accuracy~accent+stress+condition +(1|ppn) + (1|target), data = D, na.action = na.exclude)

#Power analysis
power_model1 <- mixedpower(model = model1, data = D,
                        fixed_effects = c("condition", "stress", "accent"),
                        simvar = "ppn", steps = c(20,30,40,50,60),
                        critical_value = 2.14441, n_sim = 500)
## [1] "Estimating power for step:"
## [1] 20
## [1] "Simulations for step  20  are based on  500  successful single runs"
## [1] "Estimating power for step:"
## [1] 30
## [1] "Simulations for step  30  are based on  500  successful single runs"
## [1] "Estimating power for step:"
## [1] 40
## [1] "Simulations for step  40  are based on  500  successful single runs"
## [1] "Estimating power for step:"
## [1] 50
## [1] "Simulations for step  50  are based on  500  successful single runs"
## [1] "Estimating power for step:"
## [1] 60
## [1] "Simulations for step  60  are based on  500  successful single runs"
#tabulate power analysis results
tab05 <- power_model1

apa_table(
  tab05
  , caption = "Power fixed effects for number of participants"
)
```
(#tab:unnamed-chunk-17)
Power fixed effects for number of participants

| effect               | N = 20 | N = 30 | N = 40 | N = 50 | N = 60 | mode      |
|----------------------|--------|--------|--------|--------|--------|-----------|
| accentaccent present | 0.37   | 0.53   | 0.68   | 0.80   | 0.83   | databased |
| stressdifference     | 0.99   | 1.00   | 1.00   | 1.00   | 1.00   | databased |
| conditiongesture     | 1.00   | 1.00   | 1.00   | 1.00   | 1.00   | databased |

Table 6. Three way interaction model

```r
#make a more complex lme4 model with the three-way interaction
model2 <- lmer(accuracy~accent*stress*condition +(1|ppn) + (1|target), data = D, na.action = na.exclude)

#Power analysis 2
power_model2 <- mixedpower(model = model2, data = D,
                        fixed_effects = c("condition", "stress", "accent"),
                        simvar = "ppn", steps = c(20,30,40,50,60),
                        critical_value = 2.14441, n_sim = 500)
## [1] "Estimating power for step:"
## [1] 20
## [1] "Simulations for step  20  are based on  500  successful single runs"
## [1] "Estimating power for step:"
## [1] 30
## [1] "Simulations for step  30  are based on  500  successful single runs"
## [1] "Estimating power for step:"
## [1] 40
## [1] "Simulations for step  40  are based on  500  successful single runs"
## [1] "Estimating power for step:"
## [1] 50
## [1] "Simulations for step  50  are based on  500  successful single runs"
## [1] "Estimating power for step:"
## [1] 60
## [1] "Simulations for step  60  are based on  500  successful single runs"
tab06 <- power_model2

apa_table(
  tab06
  , caption = "Power fixed effects for number of participants"
)
```
(#tab:unnamed-chunk-18)
Power fixed effects for number of participants

| effect                                                  | N = 20 | N = 30 | N = 40 | N = 50 | N = 60 | mode      |
|---------------------------------------------------------|--------|--------|--------|--------|--------|-----------|
| accentaccent present                                    | 0.12   | 0.15   | 0.24   | 0.26   | 0.32   | databased |
| stressdifference                                        | 0.91   | 0.99   | 1.00   | 1.00   | 1.00   | databased |
| conditiongesture                                        | 0.27   | 0.43   | 0.53   | 0.65   | 0.75   | databased |
| accentaccent present:stressdifference                   | 0.32   | 0.53   | 0.66   | 0.77   | 0.85   | databased |
| accentaccent present:conditiongesture                   | 0.30   | 0.42   | 0.61   | 0.67   | 0.75   | databased |
| stressdifference:conditiongesture                       | 0.02   | 0.01   | 0.01   | 0.01   | 0.03   | databased |
| accentaccent present:stressdifference:conditiongesture  | 0.03   | 0.03   | 0.05   | 0.06   | 0.08   | databased |